Verification reuse ensures predictable design

By Will Walker
Integrated System Design

January 3, 2002 (5:42 p.m. EST)

Your system-on-chip is made up of well-designed modules. The crack design team has covered every base, verifying each piece. Now you're integrating the modules into subassemblies and into a complete system. How do you make your integrated system as solid and complete as the modules? And how do you do it fast? After this you want to put together another chip, incrementally different from this one, and not feel like you're starting over.

In design reuse, verification is where the rubber hits the road. We all know how to make our architecture modular. We know to standardize interfaces and use compatible back-end flows. It's verification that brings us screeching to a halt. Yes, my module and your module have great testbenches and great test suites. But when I put them together, I need a new testbench and all-new tests. At each integration step we feel like we're starting over. Verification is not scaling up like other design tasks. That's why, as designs get bigger and bigger, the percentage of our time spent on verification is going up and up.

This article will describe some ways to make your verification efforts reusable, so that the integration stages of your project will be predictable efforts instead of schedule black holes. I will use the National Semiconductor Geode GX2 as an example. Many of these ideas were developed specifically for the Geode series of integrated processors, and the GX2 project proved to be the most successful application yet of this approach.

There are many pieces to a verification environment: things like testbench components, test suites, random test generators, test plans, assertions and coverage analysis. If you want to blast open the integration bottleneck, all these things also have to be usable at the next level up in the hierarchy. You spend more of your time creating and tuning these verification pieces than you do designing the module itself. It'd be crazy to leave that part of the work out of the reuse equation.

There is nothing easy or free about verification reuse. It is a lot of extra work to make a verification environment that isn't just good but is also reusable. But the payoff is big. In each section below I first describe how things are verified at the module level. Then I show how the module verification work can be reused when the time comes to verify combinations of modules.

Testbench components
A testbench has pieces to stimulate your design and to observe and evaluate its behavior. Testbenches are necessarily specific to the module under test. But we found that by organizing our testbenches in a certain canonical way, and by standardizing the way the parts of the testbench communicate with one another, we were able to make those parts usable at the next integration level.

A canonical testbench for a single module has a test reader, a transactor, a watcher, an emulator and a checker. The test reader reads the test language and converts it into a series of commands to the transactor. The transactor drives the input signals of the module being tested. The watcher observes the behavior of the module and reports it as a series of events.

The emulator is the reference model of the design. We wrote our emulators in C++. They are transaction-accurate, not cycle-accurate. The emulator sees the same series of commands that go into the transactor. It produces a series of events showing how we think the module should behave.

The checker sees both the stream of events from the emulator and the stream of events coming from the watcher. It compares the two and reports mismatches as errors.
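
To make the division of labor concrete, here is a minimal C++ sketch of these roles under the event-stream scheme just described. All of the type and function names (Command, Event, Emulator, check) are hypothetical illustrations; the actual Geode GX2 testbench code is not shown in this article.

    #include <iostream>
    #include <queue>
    #include <string>

    // Hypothetical transaction types shared by every testbench component.
    struct Command { std::string op; unsigned addr; unsigned data; };

    struct Event {
        std::string kind; unsigned addr; unsigned data;
        bool operator==(const Event& o) const {
            return kind == o.kind && addr == o.addr && data == o.data;
        }
    };

    // The transactor and watcher are omitted: the transactor turns Commands
    // into pin activity on the module, and the watcher turns pin activity
    // back into Events.

    // The emulator is the transaction-accurate (not cycle-accurate) C++
    // reference model: it consumes the same Commands the transactor sees
    // and predicts the Events the module should produce.
    class Emulator {
    public:
        void apply(const Command& c, std::queue<Event>& predicted) {
            // Purely illustrative behavior: a write is acknowledged as an event.
            if (c.op == "write")
                predicted.push(Event{"write_done", c.addr, c.data});
        }
    };

    // The checker is deliberately dumb: it only compares the two event
    // streams and reports any mismatch as an error.
    bool check(std::queue<Event>& fromEmulator, std::queue<Event>& fromWatcher) {
        while (!fromEmulator.empty() && !fromWatcher.empty()) {
            if (!(fromEmulator.front() == fromWatcher.front())) {
                std::cerr << "event mismatch\n";
                return false;
            }
            fromEmulator.pop();
            fromWatcher.pop();
        }
        return fromEmulator.empty() && fromWatcher.empty();   // no leftovers
    }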

Many designers are used to watchers and transactors in their testbenches but will see emulators and checkers as a lot of extra work. It might seem easier to write self-checking tests, or to use assertions and smart watchers to find errors. But we have found that writing an emulator plus a dumb watcher and a dumb checker is often easier than writing a smart watcher. And it frees your tests from checking themselves, which makes random testing much easier.

But the real power of this particular way of organizing a testbench comes at integration. I can now slap together a testbench that has my neighbor module's emulator driving my transactor instead of my tests driving my transactor. Then I can run all of that module's tests unchanged on my module. Or I can instantiate both modules, with both watchers, both emulators and both checkers. If my input is point-to-point from the other module then I just remove my transactor and run the other module's tests. If my input is a multidriver bus, then I can even leave both transactors in and play two tests at the same time, one chosen from the test suite of each module. By doing some extra work on the module-level testbench, I now have parts that I can also use in the integration testbench.

This reusability extends all the way up the hierarchy. By mixing and matching emulators and transactors you can test whatever combination of modules you need to test without having to design new testbench components.
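
As a rough illustration of that rewiring, the sketch below reuses the hypothetical Command and Event types from the earlier sketch. The CommandSource interface and NeighborDriver class are my assumptions, not the article's actual code; the point is that anything able to emit Commands can drive a transactor.

    // Anything that can emit Commands can drive a transactor: a test reader
    // at module level, or a neighbor module's emulator at integration level.
    struct CommandSource {
        virtual bool next(Command& out) = 0;   // false when the stream is done
        virtual ~CommandSource() = default;
    };

    // Point-to-point integration: the neighbor emulator's predicted Events
    // are translated into Commands for my transactor, so the neighbor's
    // entire test suite runs against my module unchanged.
    class NeighborDriver : public CommandSource {
        std::queue<Event>& neighborEvents;   // filled by the neighbor's emulator
    public:
        explicit NeighborDriver(std::queue<Event>& ev) : neighborEvents(ev) {}
        bool next(Command& out) override {
            if (neighborEvents.empty()) return false;
            const Event& e = neighborEvents.front();
            out = Command{e.kind, e.addr, e.data};   // trivial 1:1 translation
            neighborEvents.pop();
            return true;
        }
    };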

The Geode GX2 design team used a testbench for each major module, such as the memory controller, PCI interface, processor core and display controller. Then we combined the parts to make testbenches for combinations of modules. By the end of the project we were using 40 different testbenches, but they all shared components.

Test plan
Every verification environment should have a test plan. A test plan is like exercise: we know we should do it, but sometimes we just try not to think about it because it's so . . . manual. I suggest that you write it down. Then get the whole team together and brainstorm ideas for tests. Do it early and often. The test plan needs to be reusable, too. We chose a simple format that let us describe corner cases and then mark whether each had been covered, and by which test. When you use a common language for test plans, you can have tools that track your progress against the plan, and you can roll the test plans up as you go up the integration hierarchy.
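
As a hedged sketch of what such a common format might reduce to, the record below pairs each corner case with the test that covers it; rolling plans up the hierarchy is then just concatenation. The field names are illustrative, not the format we actually used.

    #include <string>
    #include <vector>

    // One corner case per entry; an empty coveredBy marks an open hole.
    struct PlanEntry {
        std::string cornerCase;   // e.g. "FIFO full while snoop pending"
        std::string coveredBy;    // name of the covering test, empty if none
    };

    // Rolling plans up the integration hierarchy is just concatenation:
    // the integration testbench inherits every corner case from below.
    std::vector<PlanEntry> rollUp(const std::vector<PlanEntry>& a,
                                  const std::vector<PlanEntry>& b) {
        std::vector<PlanEntry> merged(a);
        merged.insert(merged.end(), b.begin(), b.end());
        return merged;
    }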

The painful, unavoidable truth of verification is that there is no substitute for carefully crafted handwritten tests. They are labor-intensive and tedious to produce and to tune. But the person who can create a test to expose a tricky corner case is next year's great logic designer. In fact, the module designers should be writing most of the tests. There is an important role for both black-box testing and white-box testing in digital design.

We designed a C++ application programming interface to be our test language for peripheral modules, and used x86 assembly for the processor core. But your specific test language is something you have to create for your type of design. It is important to have as few different test languages as possible; ideally there should be just one. If multiple modules can use the same language for their tests, then you have saved a lot of work when the time comes to run those tests on hardware, or on next year's design, with its new combination of modules.
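
The article does not publish the real API, but a peripheral-module test language of this kind might look something like the following sketch. The calls write32, read32 and expect32 are hypothetical, stubbed here with a local map so the example is self-contained; in the real environment they would emit commands to the transactor and the emulator.

    #include <cassert>
    #include <cstdint>
    #include <map>

    // Stub model in place of the transactor/emulator plumbing.
    class TestApi {
        std::map<uint32_t, uint32_t> mem;
    public:
        void write32(uint32_t addr, uint32_t data)  { mem[addr] = data; }
        uint32_t read32(uint32_t addr)              { return mem[addr]; }
        void expect32(uint32_t addr, uint32_t want) { assert(read32(addr) == want); }
    };

    // Because the same API drives the module-level transactor and the
    // integration-level bus, a test written once runs unchanged in both:
    void wraparound_test(TestApi& t) {
        for (uint32_t i = 0; i < 64; ++i)
            t.write32(0x8000 + 4 * i, i);
        t.expect32(0x8000, 0);   // first write still intact after 64 writes
    }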

Random tests are a critical part of verification. For the Geode GX2 we wrote thousands of tests and generated thousands more random tests. We randomized everything we could think of: interrupts, suspends, debug interruptions, snoops, memory latencies, I/O latencies, arbitration styles, and on and on. Any time there are two ways of doing something, make sure you do both, with some appropriate weighting between the two.
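
One way to implement that rule is to put every such choice behind an explicitly weighted random knob, as in this C++ sketch. The knob names and weights are illustrative only.

    #include <random>

    // Every stimulus choice goes behind a weighted knob, so both paths of
    // every "two ways of doing something" get exercised.
    struct RandomKnobs {
        std::mt19937 rng{12345};   // fixed seed, so a failing test replays exactly

        bool coin(double pTrue) {  // weighted boolean choice
            return std::bernoulli_distribution(pTrue)(rng);
        }
        int range(int lo, int hi) {  // e.g. a random memory or I/O latency
            return std::uniform_int_distribution<int>(lo, hi)(rng);
        }
    };

    // Usage per transaction:
    //   RandomKnobs k;
    //   bool injectSnoop = k.coin(0.1);     // snoop on 10 percent of transactions
    //   int memLatency   = k.range(1, 40);  // 1 to 40 cycles of memory latency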

There are some things that only a logic designer knows about his or her design. Designers need to encode this knowledge as assertions; then it is checked automatically during every simulation. An assertion is like a mini-watcher: it checks some condition and causes the test to fail if the condition is violated. You might assert that two signals never fire at the same time, that one signal never fires within six cycles of the other, or that a FIFO never underflows. No matter how well we specify and document a digital interface, we always make assumptions about how the interface is used, and we clear those assumptions, often verbally, with the designer on the other side. We should also take the extra step of writing those assumptions into the design as assertions. Then, two years later, when someone you don't even know is using your design and the interface protocol changes slightly, your assertion will go off and they can go right to the problem.
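
As an illustration, here is a minimal C++ mini-watcher for the six-cycle example above. In practice an assertion like this would live in the RTL itself, so the per-cycle sampling interface shown here is only a stand-in.

    #include <cassert>

    // Checks the rule "signal B never fires within six cycles of signal A".
    // clock() is called once per simulated cycle with that cycle's activity.
    class MinSpacingAssert {
        int cyclesSinceA = 1000;   // large initial value: no A seen yet
    public:
        void clock(bool aFired, bool bFired) {
            if (aFired) cyclesSinceA = 0;
            else if (cyclesSinceA < 1000) ++cyclesSinceA;   // saturating counter
            if (bFired)
                assert(cyclesSinceA >= 6 && "B fired within 6 cycles of A");
        }
    };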

Some designers love assertions because they make debug so easy. Some have to be cajoled into using them. It is well worth it. Assertions make your design more robust, and they are trivially reusable at the integration level because they are part of the module's register-transfer-level logic. Another benefit of assertions is that they can be fed to static rule checkers. We used equivalence checkers on the Geode GX2 but not static rule checkers; formal verification tools are advancing rapidly in features and in the complexity they can handle.

Coverage analysis
Even with a good test plan, a good testbench and random tests, your verification is an open loop without coverage analysis. Coverage analysis tells you how good a job you are doing and shows you which things still need attention. The most mature coverage tools are the code-coverage tools. They check that all the code you wrote is being exercised, that every signal in your design is wiggling, or that every flop in your design flipped. The Geode GX2 team used code-coverage tools, achieving results in the 90 to 99 percent range. We made plans for a functional coverage tool but did not have the time and resources to implement it.
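
A functional coverage tool of the kind we planned could start as simply as counting hits on named corner cases from the test plan. The sketch below is hypothetical, not a description of any tool we built.

    #include <iostream>
    #include <map>
    #include <string>

    // Corner cases are declared up front from the test plan, then counted
    // as the watcher or emulator reports them; zero-hit entries are holes.
    class FunctionalCoverage {
        std::map<std::string, unsigned> hits;
    public:
        void declare(const std::string& cornerCase) { hits.emplace(cornerCase, 0); }
        void hit(const std::string& cornerCase)     { ++hits[cornerCase]; }
        void report() const {
            for (const auto& h : hits)
                std::cout << h.first << ": " << h.second << " hits\n";
        }
    };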

In conclusion, it's not enough to test a module in its own environment, integrate it, run a few top-level tests and say you're done. You have to thoroughly test every module again at the integration level, reusing the verification work: the tests, the test generators, the testbench devices, the coverage analysis and so on. When you can reuse all of that work, you give yourself a fighting chance of making that integration just an integration, and not a whole new design.

---

Will Walker is a senior engineering manager at National Semiconductor Corp. (Longmont, Colo.). He holds a BSEE from Rice University and an MSEE from Stanford University. He designed RISC processors at Hewlett-Packard for nine years, then moved on in 1994 to design X86 processors at Cyrix and NSC.

http://www.isdmag.com

© 2002 CMP Media LLC.
1/1/02, Issue # 14151, page 6.